# Instruction Fine-Tuning
## Aya 23 8B

- Publisher: CohereLabs
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Stats: 10.28k · 415

Aya-23 is an open-weights research release of an instruction fine-tuned model with strong multilingual capabilities, supporting 23 languages.
## Mistral Small 3.1 24B Instruct 2503 GPTQ 4b 128g

- Publisher: ISTA-DASLab
- License: Apache-2.0
- Tags: Large Language Model
- Stats: 21.89k · 13

An INT4 quantized version of Mistral-Small-3.1-24B-Instruct-2503, using the GPTQ algorithm to reduce weights from 16-bit to 4-bit, significantly decreasing disk size and GPU memory requirements.
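The "4b 128g" in the name refers to 4-bit weights with a quantization group size of 128. A minimal sketch of that group-wise idea, using simple symmetric round-to-nearest rather than GPTQ's actual error-compensating algorithm (all names here are illustrative):

```python
import numpy as np

def quantize_4bit(weights: np.ndarray, group_size: int = 128):
    """Quantize a flat float weight vector to 4-bit codes, one scale per group."""
    w = weights.reshape(-1, group_size)
    # Symmetric quantization: map each group's max magnitude onto the
    # signed 4-bit range [-8, 7] via a per-group scale.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from codes and scales."""
    return (q * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
# Round-to-nearest bounds the per-weight error by half a scale step.
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Each 4-bit code takes a quarter of the storage of a 16-bit weight; the per-group scales add only a small overhead (one value per 128 weights).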
## Gemma 3 27b It Quantized W4A16

- Publisher: abhishekchohan
- Tags: Large Language Model, Transformers
- Stats: 640 · 4

Gemma 3 is an instruction-tuned large language model developed by Google. This repository provides its 27B-parameter W4A16 (4-bit weights, 16-bit activations) quantized version, significantly reducing hardware requirements.
## Olmo 2 0325 32B Instruct 4bit

- Publisher: mlx-community
- License: Apache-2.0
- Tags: Large Language Model, Transformers, English
- Stats: 270 · 10

A 4-bit quantized version converted from the allenai/OLMo-2-0325-32B-Instruct model, optimized for the MLX framework and suitable for text generation tasks.
## Llama 3 8B Instruct 32k V0.1 GGUF

- Publisher: MaziyarPanahi
- Tags: Large Language Model
- Stats: 226.09k · 57

A GGUF quantized version of Llama-3-8B-Instruct-32k-v0.1, available in multiple quantization bit widths and suitable for text generation tasks.
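To get a rough sense of why the multi-bit GGUF variants matter for an 8B-parameter model, a back-of-the-envelope sketch of weight storage (illustrative only; real GGUF files add per-block scales and metadata, so actual sizes run somewhat larger):

```python
# Approximate weight-only storage for a model at a given bit width.
def weight_gb(n_params: float, bits: int) -> float:
    """Bytes needed for n_params weights at `bits` bits each, in GB."""
    return n_params * bits / 8 / 1e9

# An 8B-parameter model at common quantization levels.
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_gb(8e9, bits):.1f} GB")
```

Halving the bit width halves the weight footprint, which is what lets a 16 GB FP16 checkpoint fit on consumer GPUs once quantized to 4-bit.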
## Norocetacean 20B 10k GGUF

- Publisher: TheBloke
- License: Other
- Tags: Large Language Model
- Stats: 3,364 · 6

Norocetacean 20B 10K is a large language model based on the Llama 2 architecture, fine-tuned for Chinese-language tasks.